We introduce and develop a novel approach to outlier detection based on an adaptation of random subspace learning. Our proposed method handles both high-dimension low-sample size and traditional low-dimensional high-sample size datasets. Essentially, we avoid the computational bottleneck of techniques like the minimum covariance determinant (MCD) by computing the needed determinants and associated measures in much lower-dimensional subspaces. Both the theoretical and computational development of our approach reveal that it is computationally more efficient than regularized methods in the high-dimensional low-sample size setting, and that it often competes favorably with existing methods in terms of the percentage of correctly detected outliers.
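The core idea of scoring observations in random low-dimensional feature subspaces rather than the full space can be illustrated with a minimal sketch. This is not the authors' implementation: the paper computes MCD-based determinants and associated robust measures per subspace, whereas for simplicity this sketch uses a plain (non-robust) covariance and Mahalanobis distance in each subspace; the function name, the averaging aggregation, and all parameter defaults are illustrative assumptions.

```python
import numpy as np

def subspace_outlier_scores(X, n_subspaces=50, subspace_dim=3, seed=0):
    """Score each row of X by averaging squared Mahalanobis distances
    computed in randomly drawn low-dimensional feature subspaces.

    Illustrative sketch only: the paper's method uses robust MCD
    estimates within each subspace; here a plain sample covariance
    stands in for simplicity.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    scores = np.zeros(n)
    for _ in range(n_subspaces):
        # Draw a random subset of features (the low-dimensional subspace).
        idx = rng.choice(p, size=min(subspace_dim, p), replace=False)
        Z = X[:, idx]
        mu = Z.mean(axis=0)
        cov = np.atleast_2d(np.cov(Z, rowvar=False))
        inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singularity
        d = Z - mu
        # Squared Mahalanobis distance of each row in this subspace.
        scores += np.einsum('ij,jk,ik->i', d, inv, d)
    return scores / n_subspaces
```

Because each subspace has only `subspace_dim` features, every covariance and (pseudo-)inverse is computed on a tiny matrix, which is what sidesteps the cost of full-dimensional determinant computations when the number of features is large relative to the sample size.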